63 research outputs found
Estimating PM2.5 in the Beijing-Tianjin-Hebei Region Using MODIS AOD Products from 2014 to 2015
Fine particulate matter with a diameter less than 2.5 μm (PM2.5) has harmful impacts on regional climate, economic development and public health. The high PM2.5 concentrations in China’s urban areas are mainly caused by combustion of coal and gasoline, industrial pollution and unknown/uncertain sources. The Beijing-Tianjin-Hebei (BTH) region, with a land area of 218,000 km2 and 13 cities, is the biggest urbanized region in northern China. Its huge population (110 million, 8% of China’s population), local heavy industries and vehicle emissions have resulted in severe air pollution. To monitor ground-level PM2.5 concentrations, the Chinese government has invested heavily in building more than 1500 in-situ stations (79 in the BTH region). However, most of these stations are situated in urban areas, and each station represents only a limited surrounding area, which leaves the vast rural land unmonitored. Geographic information systems and remote sensing can therefore serve as complementary tools. Traditional models have used the 10 km MODIS Aerosol Optical Depth (AOD) product and established a statistical relationship between AOD and PM2.5. In 2014, the 3 km MODIS AOD product was released, making higher-resolution PM2.5 estimation possible.
This study estimates the PM2.5 distribution in the BTH region from September 2014 to August 2015 by combining MODIS satellite data, ground measurements of PM2.5, and meteorological records. First, the 3 km and 10 km MODIS AOD products were validated against AErosol RObotic NETwork (AERONET) AOD. Then, multiple linear regression (MLR) and geographically weighted regression (GWR) models were employed to estimate PM2.5 concentrations from the ground measurements, the two MODIS AOD products, meteorological datasets and land use information. Seasonal and regional analyses followed, comparing the strengths and weaknesses of the 3 km and 10 km AOD products. Finally, the number of non-accidental deaths attributable to long-term exposure to PM2.5 in the BTH region was estimated spatially.
The results demonstrated that the 10 km AOD product provided higher accuracy and greater coverage, although the 3 km AOD product captured more of the spatial variation in the PM2.5 estimates. Additionally, compared with the global regression, the geographically weighted regression model improved the estimation results. Finally, it was estimated that more than 30,000 deaths in the BTH region during the study period were attributable to excessive PM2.5 concentrations.
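The key difference between the global MLR and the GWR model above is that GWR fits a separate weighted least-squares regression at every location, down-weighting distant stations with a spatial kernel. A minimal sketch of that idea, using entirely synthetic station data; the Gaussian kernel, bandwidth, and all variable names are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

def gwr_coefficients(coords, X, y, query, bandwidth):
    """Local regression coefficients at one query location: ordinary least
    squares in which each sample is weighted by a Gaussian kernel of its
    distance to the query point."""
    d = np.linalg.norm(coords - query, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # spatial kernel weights
    Xb = np.column_stack([np.ones(len(X)), X])     # intercept + predictors
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(Xb * sw, y * sw.ravel(), rcond=None)
    return beta                                    # local [intercept, slopes]

# toy example: PM2.5 ~ AOD with a slope that drifts from west to east
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))        # synthetic stations (km)
aod = rng.uniform(0.1, 1.5, size=200)
slope = 40 + 0.3 * coords[:, 0]
pm25 = 10 + slope * aod + rng.normal(0, 2, size=200)

beta_west = gwr_coefficients(coords, aod[:, None], pm25, np.array([10.0, 50.0]), 15.0)
beta_east = gwr_coefficients(coords, aod[:, None], pm25, np.array([90.0, 50.0]), 15.0)
```

A global MLR would return a single compromise slope for the whole region; the two local fits recover the spatially varying AOD–PM2.5 relationship.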
Circular Accessible Depth: A Robust Traversability Representation for UGV Navigation
In this paper, we present the Circular Accessible Depth (CAD), a robust
traversability representation for an unmanned ground vehicle (UGV) to learn
traversability in various scenarios containing irregular obstacles. To predict
CAD, we propose a neural network, namely CADNet, with an attention-based
multi-frame point cloud fusion module, Stability-Attention Module (SAM), to
encode the spatial features from point clouds captured by LiDAR. CAD is
designed based on the polar coordinate system and focuses on predicting the
border of the traversable area. Because CAD encodes the spatial information of
the surrounding environment, it enables semi-supervised learning for CADNet
and thus avoids annotating a large amount of data. Extensive
experiments demonstrate that CAD outperforms baselines in terms of robustness
and precision. We also implement our method on a real UGV and show that it
performs well in real-world scenarios. Comment: 13 pages, 8 figures
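The paper's exact formulation is not reproduced in the abstract, but the core idea of a polar-coordinate traversable border can be sketched simply: discretize azimuth around the vehicle into sectors and record, per sector, the distance to the nearest obstacle point. The function and parameter names below are hypothetical, not CADNet's actual interface:

```python
import numpy as np

def circular_accessible_depth(obstacles, n_bins=360, max_range=50.0):
    """For each azimuth sector around the vehicle, the accessible depth is
    the distance to the nearest obstacle point in that sector (max_range if
    the sector is free) -- a polar border of the traversable area."""
    r = np.linalg.norm(obstacles[:, :2], axis=1)              # planar range
    yaw = np.arctan2(obstacles[:, 1], obstacles[:, 0])        # [-pi, pi]
    bins = ((yaw + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    cad = np.full(n_bins, max_range)
    np.minimum.at(cad, bins, r)    # unbuffered per-bin minimum
    return cad

# two obstacle points: one 5 m ahead, one 3 m to the right
obs = np.array([[5.0, 0.0, 0.2], [0.0, -3.0, 0.1]])
cad = circular_accessible_depth(obs)
```

Such a 1D polar profile is a compact target for a network: predicting one depth per azimuth bin is much cheaper than labeling every point in the cloud, which is consistent with the annotation savings described above.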
Splitting of surface defect partition functions and integrable systems
We study Bethe/gauge correspondence at the special locus of Coulomb moduli
where the integrable system exhibits the splitting of degenerate levels. For
this investigation, we consider the four-dimensional pure
supersymmetric gauge theory, with a half-BPS surface defect constructed
with the help of an orbifold or a degenerate gauge vertex. We show that the
non-perturbative Dyson-Schwinger equations imply the Schrödinger-type and the
Baxter-type differential equations satisfied by the respective surface defect
partition functions. At the special locus of Coulomb moduli the surface defect
partition function splits into parts. We recover the Bethe/gauge dictionary for
each summand. Comment: 34 pages, 2 figures; v2 published version
Evaluation of Hybrid VMAT Advantages and Robustness Considering Setup Errors Using Surface Guided Dose Accumulation for Internal Mammary Lymph Node Irradiation in Postmastectomy Radiotherapy
Objectives: Setup error is a key factor affecting postmastectomy radiotherapy (PMRT), and irradiation of the internal mammary lymph nodes is the most investigated aspect for PMRT patients. In this study, we evaluated the robustness and the radiobiological and dosimetric benefits of the hybrid volumetric modulated arc therapy (H-VMAT) planning technique based on setup error in dose accumulation, using a surface-guided system for radiation therapy. Methods: We retrospectively selected 32 patients treated by a radiation oncologist and evaluated the clinical target volume (CTV), including internal mammary node irradiation (IMNI), with a planning target volume (PTV) margin of 5 mm. Three planning techniques were evaluated: tangential-VMAT (T-VMAT), intensity-modulated radiation therapy (IMRT), and H-VMAT. The interfraction and intrafraction setup errors were analyzed in each field, and the accumulated dose was evaluated as the patients underwent daily surface-guided monitoring. These parameters were included while evaluating CTV coverage, the dose to the left anterior descending artery (LAD) and the left ventricle (LV), the normal tissue complication probability (NTCP) for the heart and lungs, and the second cancer complication probability (SCCP) for the contralateral breast (CB). Results: When setup error was accounted for in dose accumulation, T-VMAT (95.51%) and H-VMAT (95.48%) had higher CTV coverage than IMRT (91.25%). For the NTCP of the heart, H-VMAT (0.04%) was higher than T-VMAT (0.01%) and lower than IMRT (0.2%). However, the SCCP for the CB with H-VMAT (1.05%) was lower than with T-VMAT (2%), and H-VMAT was also favorable in delivery efficiency. T-VMAT (3.72) and IMRT (10.5) had higher plan complexity than H-VMAT (3.71). Conclusions: Based on the dose accumulation of setup error for patients with left-sided PMRT with IMNI, we found that the H-VMAT technique was superior for achieving an optimal balance between target coverage, OAR dose, complication probability, plan robustness, and complexity.
CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP
Contrastive Language-Image Pre-training (CLIP) achieves promising results in
2D zero-shot and few-shot learning. Despite the impressive performance in 2D,
applying CLIP to help the learning in 3D scene understanding has yet to be
explored. In this paper, we make the first attempt to investigate how CLIP
knowledge benefits 3D scene understanding. We propose CLIP2Scene, a simple yet
effective framework that transfers CLIP knowledge from 2D image-text
pre-trained models to a 3D point cloud network. We show that the pre-trained 3D
network yields impressive performance on various downstream tasks, i.e.,
annotation-free and fine-tuning with labelled data for semantic segmentation.
Specifically, built upon CLIP, we design a Semantic-driven Cross-modal
Contrastive Learning framework that pre-trains a 3D network via semantic and
spatial-temporal consistency regularization. For the former, we first leverage
CLIP's text semantics to select the positive and negative point samples and
then employ the contrastive loss to train the 3D network. In terms of the
latter, we force the consistency between the temporally coherent point cloud
features and their corresponding image features. We conduct experiments on
SemanticKITTI, nuScenes, and ScanNet. For the first time, our pre-trained
network achieves annotation-free 3D semantic segmentation with 20.8% and 25.08%
mIoU on nuScenes and ScanNet, respectively. When fine-tuned with 1% or 100%
labelled data, our method significantly outperforms other self-supervised
methods, with improvements of 8% and 1% mIoU, respectively. Furthermore, we
demonstrate the generalizability for handling cross-domain datasets. Code is
publicly available at https://github.com/runnanchen/CLIP2Scene. Comment: CVPR 202
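The semantic-driven contrastive step described above amounts to an InfoNCE-style cross-entropy between point features and class text embeddings, where CLIP's text semantics supply the positive class for each point. A minimal NumPy sketch; the temperature value and all names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def semantic_contrastive_loss(point_feats, text_feats, labels, tau=0.07):
    """Each point feature is pulled toward the text embedding of its
    (CLIP-selected) class and pushed from the other class embeddings, via a
    temperature-scaled softmax cross-entropy over cosine similarities."""
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = p @ t.T / tau                           # (N_points, N_classes)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

# sanity check: aligned features give near-zero loss, misassigned ones do not
feats = np.eye(3)
texts = np.eye(3)
loss_match = semantic_contrastive_loss(feats, texts, np.array([0, 1, 2]))
loss_mismatch = semantic_contrastive_loss(feats, texts, np.array([1, 2, 0]))
```

The point of routing positives through text embeddings is that no 3D labels are needed, which is what makes the annotation-free setting above possible.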
LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion
LiDAR-camera fusion methods have shown impressive performance in 3D object
detection. Recent advanced multi-modal methods mainly perform global fusion,
where image features and point cloud features are fused across the whole scene.
Such practice lacks fine-grained region-level information, yielding suboptimal
fusion performance. In this paper, we present the novel Local-to-Global fusion
network (LoGoNet), which performs LiDAR-camera fusion at both local and global
levels. Concretely, the Global Fusion (GoF) of LoGoNet is built upon previous
literature, while we exclusively use point centroids to more precisely
represent the position of voxel features, thus achieving better cross-modal
alignment. As for the Local Fusion (LoF), we first divide each proposal into
uniform grids and then project these grid centers to the images. The image
features around the projected grid points are sampled to be fused with
position-decorated point cloud features, maximally utilizing the rich
contextual information around the proposals. The Feature Dynamic Aggregation
(FDA) module is further proposed to achieve information interaction between
these locally and globally fused features, thus producing more informative
multi-modal features. Extensive experiments on both Waymo Open Dataset (WOD)
and KITTI datasets show that LoGoNet outperforms all state-of-the-art 3D
detection methods. Notably, LoGoNet ranks 1st on Waymo 3D object detection
leaderboard and obtains 81.02 mAPH (L2) detection performance. It is noteworthy
that, for the first time, the detection performance on three classes surpasses
80 APH (L2) simultaneously. Code will be available at
https://github.com/sankin97/LoGoNet. Comment: Accepted by CVPR202
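Sampling image features "around the projected grid points" is typically done by bilinear interpolation at continuous pixel coordinates. A minimal sketch of that sampling step; this is a generic implementation under assumed array shapes, not LoGoNet's actual code:

```python
import numpy as np

def bilinear_sample(feat_map, uv):
    """Sample an (H, W, C) image feature map at continuous (u, v) pixel
    coordinates, as one would for grid centers projected from 3D proposals
    onto the image plane."""
    H, W, _ = feat_map.shape
    u = np.clip(uv[:, 0], 0, W - 1.0 - 1e-6)
    v = np.clip(uv[:, 1], 0, H - 1.0 - 1e-6)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    # four neighboring feature vectors, mixed by the fractional offsets
    f00 = feat_map[v0, u0];     f01 = feat_map[v0, u0 + 1]
    f10 = feat_map[v0 + 1, u0]; f11 = feat_map[v0 + 1, u0 + 1]
    top = f00 * (1 - du) + f01 * du
    bot = f10 * (1 - du) + f11 * du
    return top * (1 - dv) + bot * dv                 # (N, C)

# toy feature map whose single channel is the ramp u + 10*v
fm = np.zeros((4, 5, 1))
fm[..., 0] = np.arange(5)[None, :] + 10 * np.arange(4)[:, None]
sampled = bilinear_sample(fm, np.array([[1.5, 2.5]]))
```

Because the interpolation is differentiable in the feature values, the sampled image features can be fused with the position-decorated point features and trained end to end.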
Rethinking Range View Representation for LiDAR Segmentation
LiDAR segmentation is crucial for autonomous driving perception. Recent
trends favor point- or voxel-based methods as they often yield better
performance than the traditional range view representation. In this work, we
unveil several key factors in building powerful range view models. We observe
that the "many-to-one" mapping, semantic incoherence, and shape deformation are
possible impediments against effective learning from range view projections. We
present RangeFormer -- a full-cycle framework comprising novel designs across
network architecture, data augmentation, and post-processing -- that better
handles the learning and processing of LiDAR point clouds from the range view.
We further introduce a Scalable Training from Range view (STR) strategy that
trains on arbitrary low-resolution 2D range images, while still maintaining
satisfactory 3D segmentation accuracy. We show that, for the first time, a
range view method is able to surpass the point, voxel, and multi-view fusion
counterparts in the competing LiDAR semantic and panoptic segmentation
benchmarks, i.e., SemanticKITTI, nuScenes, and ScribbleKITTI. Comment: ICCV 2023; 24 pages, 10 figures, 14 tables; Webpage at https://ldkong.com/RangeFormer
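The range view underlying these observations is the standard spherical projection of a LiDAR sweep onto a 2D grid; the "many-to-one" mapping noted above occurs when several points fall into the same pixel. A minimal sketch of that projection; the field-of-view values follow common 64-beam sensor settings and are illustrative, not RangeFormer's actual configuration:

```python
import numpy as np

def lidar_to_range_image(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud to an H x W range image. Azimuth maps
    to columns, elevation to rows; when several points share a pixel, the
    nearest one is kept (the 'many-to-one' mapping)."""
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])            # [-pi, pi]
    pitch = np.arcsin(np.clip(points[:, 2] / np.maximum(r, 1e-8), -1, 1))
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int) % W     # column index
    v = np.clip((fu - pitch) / (fu - fd) * H, 0, H - 1).astype(int)
    img = np.full((H, W), -1.0)                             # -1 = empty pixel
    order = np.argsort(-r)            # write far points first, near last
    img[v[order], u[order]] = r[order]
    return img

# two points along the same ray: the nearer one wins the shared pixel
pts = np.array([[10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
img = lidar_to_range_image(pts)
```

Shape deformation and semantic incoherence arise from exactly this mapping: neighboring pixels can come from points at very different depths, which is what the framework's augmentation and post-processing designs target.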
UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase
Point-, voxel-, and range-views are three representative forms of point
clouds. All of them have accurate 3D measurements but lack color and texture
information. RGB images are a natural complement to these point cloud views,
and fully utilizing their combined information enables more robust perception.
In this paper, we present a unified multi-modal LiDAR segmentation
network, termed UniSeg, which leverages the information of RGB images and three
views of the point cloud, and accomplishes semantic segmentation and panoptic
segmentation simultaneously. Specifically, we first design the Learnable
cross-Modal Association (LMA) module to automatically fuse voxel-view and
range-view features with image features, which fully utilize the rich semantic
information of images and are robust to calibration errors. Then, the enhanced
voxel-view and range-view features are transformed to the point space, where
three views of point cloud features are further fused adaptively by the
Learnable cross-View Association (LVA) module. Notably, UniSeg achieves
promising results in three public benchmarks, i.e., SemanticKITTI, nuScenes,
and Waymo Open Dataset (WOD); it ranks 1st on two challenges of two benchmarks,
including the LiDAR semantic segmentation challenge of nuScenes and panoptic
segmentation challenges of SemanticKITTI. Besides, we construct the OpenPCSeg
codebase, which is the largest and most comprehensive outdoor LiDAR
segmentation codebase. It contains most of the popular outdoor LiDAR
segmentation algorithms and provides reproducible implementations. The
OpenPCSeg codebase will be made publicly available at
https://github.com/PJLab-ADG/PCSeg. Comment: ICCV 2023; 21 pages; 9 figures; 18 tables; Code at https://github.com/PJLab-ADG/PCSeg
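The LVA module's internals are not given in the abstract; one simple way to fuse three per-point view features adaptively is a learned softmax gate over the views. The sketch below is a hypothetical illustration of that pattern; W_gate and all names are assumptions, not UniSeg's actual design:

```python
import numpy as np

def gated_view_fusion(point_f, voxel_f, range_f, W_gate):
    """Adaptively mix three (N, C) view-feature sets: a linear gate scores
    each view per point and a softmax over views weights the mixture.
    W_gate is an assumed (C, 1) learnable parameter."""
    views = np.stack([point_f, voxel_f, range_f], axis=1)   # (N, 3, C)
    scores = views @ W_gate                                 # (N, 3, 1)
    scores -= scores.max(axis=1, keepdims=True)             # stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)                       # view weights
    return (views * w).sum(axis=1)                          # fused (N, C)

# with a zero gate, the fusion reduces to a plain average of the views
rng = np.random.default_rng(1)
pf, vf, rf = rng.normal(size=(3, 2, 4))
fused = gated_view_fusion(pf, vf, rf, W_gate=np.zeros((4, 1)))
```

The appeal of a per-point gate is that each point can lean on whichever view is most reliable there, e.g. range-view features near the sensor and voxel-view features in sparse, distant regions.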
Development of a nested PCR assay for specific detection of Metschnikowia bicuspidata infecting Eriocheir sinensis
In recent years, the “milky disease” caused by Metschnikowia bicuspidata has seriously affected the Eriocheir sinensis culture industry. Discovering and blocking the transmission route has become the key to controlling this disease. The existing polymerase chain reaction (PCR) detection technology for M. bicuspidata uses the ribosomal DNA (rDNA) sequence, but low sensitivity and specificity lead to frequent false detections. We developed a highly specific and sensitive nested PCR method to detect M. bicuspidata by targeting the hyphally regulated cell wall protein (HYR) gene. This nested HYR-PCR produced a single clear band, showed no cross-reaction with other pathogens, and was superior to rDNA-PCR in specificity and sensitivity. The detection limit of nested HYR-PCR (6.10 × 10^1 copies/μL) was lower than those of the large subunit ribosomal RNA gene (LSU rRNA; 6.03 × 10^4 copies/μL) and internal transcribed spacer (ITS; 6.74 × 10^5 copies/μL) PCRs. The nested HYR-PCR also showed a higher positivity rate (71.1%) than those obtained with LSU rRNA (16.7%) and ITS rDNA (24.4%). In conclusion, we developed a new nested HYR-PCR method for the specific and sensitive detection of M. bicuspidata infection. This will help to elucidate the transmission route of M. bicuspidata and to design effective management and control measures for M. bicuspidata disease.